Protecting your practice: cybersecurity defenses in the age of generative AI

Generative AI offers many benefits for physicians, but also brings new risks of data breaches

Sanjay Bhakta, MBA

The devastating cyberattack on Change Healthcare continues to send ripple effects across the health care system. Change Healthcare handles a significant portion of the billing infrastructure for many physicians and practices, so the attack has delayed claims processing and payment, creating financial strain for physicians among other major disruptions.

The hack has been so disruptive that the U.S. Department of Health and Human Services has stepped in to accelerate reimbursements to physicians who are exhausting their cash reserves. The incident is a wake-up call: physicians, too, are highly vulnerable to cyberattacks. And as more of them adopt generative AI in their work, the risks grow. It’s time for physicians to protect themselves.

Generative AI’s risks to physicians

According to IBM’s Cost of a Data Breach Report, the health care industry has reported the most expensive data breaches of any industry for the 13th year in a row, at an average cost of $10.93 million. Physicians as well as hospitals pay that cost. Protecting patient data is especially challenging in health care: breaches expose sensitive information, eroding patient trust and potentially leading to legal liability and identity theft, while physicians risk reputational damage and professional consequences.

Physicians are vulnerable for other reasons as well. Telemedicine is now routine, raising concerns about the security of video consultations and of patient data transmitted over less secure networks. And here is the biggest issue: physicians increasingly rely on generative AI to do their work. That is generally a good thing, since generative AI can improve work processes ranging from drafting clinical notes to personalizing patient education.

But generative AI is also fraught with risk. Here’s why:

Data sensitivity: generative AI models are often trained on massive amounts of sensitive patient data. Any vulnerability in AI systems could expose this data to breaches.

Third-party risks: many physicians use cloud-based generative AI tools. This introduces reliance on third-party security measures, so vulnerabilities in those vendors become risk points.

Model manipulation: bad actors could potentially manipulate the training data or the AI models themselves. This could lead to incorrect clinical notes, biased research results, or the generation of harmful content.

Integration vulnerabilities: integrating generative AI tools into existing health care systems creates additional points of entry for cyberattacks. If these integrations aren’t secure, patient data could be jeopardized.

Consider a hypothetical example: Sarah Wilson, M.D., a busy cardiologist, relies on generative AI for tasks like drafting clinical notes and uses an AI chatbot for patient scheduling and FAQs. That reliance proves costly when she unknowingly downloads malware disguised as a realistic email from her IT department. The malware infects her AI systems. During a patient appointment, her clinical note generator suggests unusual, incorrect information. Realizing something is amiss, she discovers the malware has not only manipulated the AI’s training data but also exposed patient records on the dark web.

Why would someone do that? Often the motive is ransomware: the attacker demands an enormous payment from Dr. Wilson to prevent further disruption. Or a malicious actor may simply have an axe to grind against physicians. Whatever the reason, Dr. Wilson’s practice is now in a world of hurt.

How physicians can protect themselves

How might Dr. Wilson and physicians like her safeguard themselves and their practices? Here are some recommended steps:

  • Anticipate generative AI risks. Effective cybersecurity defense always comes down to anticipating how bad actors work so you can stay a step ahead of them. That means thinking the way they think and fighting fire with fire. Tools such as MITRE Caldera and Microsoft Counterfit help any business, including health care organizations, test the security of its AI systems.
  • Implement a zero-trust architecture (ZTA). With ZTA, access to data is granted on a need-to-know basis and is continuously re-verified, so a compromised account or device cannot roam freely across a practice’s systems. Companies such as NVIDIA offer tools to help businesses implement ZTA. (The first sketch after this list illustrates the idea.)
  • Embrace data loss prevention (DLP). DLP prevents the unauthorized use of sensitive information: it classifies sensitive data, monitors channels and devices for behavior suggesting data is being shared or accessed inappropriately, and blocks the loss before it happens. (The second sketch after this list shows the classify-and-block step in miniature.)
  • Test your security with Purple Teaming. Purple Teaming is a collaborative approach that strengthens an organization’s security posture by having the attacking (red) and defending (blue) functions work as a single team, simulating both cyberattacks and the defenses against them. This allows for more realistic and comprehensive breach simulations.
  • Do regular security audits. Conduct security audits of AI systems and their integration points with existing health care IT infrastructure.
  • Train staff. Educate physicians and staff members on the cybersecurity risks associated with generative AI and on how to spot potential threats, such as the phishing email that snared Dr. Wilson.
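
To make the zero-trust idea concrete, here is a minimal sketch in Python of a need-to-know, continuously re-verified access check. The roles, record types, and 15-minute re-authentication window are illustrative assumptions, not a prescription; in a real deployment these checks come from an identity provider and a device-management platform rather than hand-rolled code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical roles and the record types each one actually needs.
# Zero trust means need-to-know, not "everyone inside the network."
ROLE_PERMISSIONS = {
    "cardiologist": {"clinical_notes", "imaging"},
    "front_desk": {"scheduling"},
    "billing": {"claims"},
}

@dataclass
class Session:
    user: str
    role: str
    device_compliant: bool      # e.g., disk encrypted, patches current
    authenticated_at: datetime

def authorize(session: Session, record_type: str) -> bool:
    """Re-verify every request instead of trusting a network perimeter."""
    # 1. Credentials must be fresh; stale sessions must re-authenticate.
    if datetime.now() - session.authenticated_at > timedelta(minutes=15):
        return False
    # 2. The requesting device must meet the practice's security baseline.
    if not session.device_compliant:
        return False
    # 3. The role must actually need this type of record (least privilege).
    return record_type in ROLE_PERMISSIONS.get(session.role, set())

session = Session("dr_wilson", "cardiologist", True, datetime.now())
print(authorize(session, "clinical_notes"))  # True
print(authorize(session, "claims"))          # False: not need-to-know
```

The point is not the specific rules but that every request is evaluated, every time.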
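
And here is a similarly minimal sketch of the DLP step described above: classifying text that looks like protected health information and blocking it before it leaves the practice, for instance before a note is pasted into a cloud-based AI tool. The patterns and blocking rule are illustrative assumptions; commercial DLP products use far richer classifiers and enforcement.

```python
import re

# Illustrative patterns for data that should never leave the practice
# unprotected. A real DLP product uses far more sophisticated detection.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:# ]?\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the names of any PHI-like patterns found in the text."""
    return [name for name, pattern in PHI_PATTERNS.items()
            if pattern.search(text)]

def allow_send(text: str) -> bool:
    """Block (or flag for review) any message that matches a PHI pattern."""
    findings = scan_outbound_text(text)
    if findings:
        print(f"Blocked: possible PHI detected ({', '.join(findings)})")
        return False
    return True

# Example: a draft note about to be pasted into a cloud AI tool.
draft = "Patient DOB 04/12/1957, MRN 00482913, follow up in 2 weeks."
allow_send(draft)  # Blocked: possible PHI detected (mrn, dob)
```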

Cyberattacks are no more likely to disappear than crime itself. But physicians and health care organizations can take steps to protect themselves as they adopt generative AI.

Sanjay Bhakta, MBA, is vice president and head of solutions at Centific.
